Weakly supervised video anomaly detection (WSVAD) is a challenging task since only video-level labels are available for training. In previous studies, the discriminative power of the learned features is not strong enough, and the data imbalance resulting from the mini-batch training strategy is ignored. To address these two issues, we propose a novel WSVAD method based on cross-batch clustering guidance. To enhance the discriminative power of features, we propose a batch-clustering-based loss to encourage a clustering branch to generate distinct normal and abnormal clusters from a batch of data. Meanwhile, we design a cross-batch learning strategy that introduces clustering results from previous mini-batches to reduce the impact of data imbalance. In addition, we propose to generate more accurate segment-level anomaly scores based on batch-clustering guidance, further improving the performance of WSVAD. Extensive experiments on two public datasets demonstrate the effectiveness of our approach.
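The abstract names a batch-clustering loss but gives no formula; the following is a minimal NumPy sketch of one plausible objective, using a naive 2-means over a mini-batch of segment features and scoring cluster compactness against separation. All details (the 2-means routine, the compactness-minus-separation form) are assumptions, not the paper's formulation.

```python
import numpy as np

def two_means(feats, iters=10, seed=0):
    """Naive 2-means over a batch of segment features (rows are D-dim)."""
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), 2, replace=False)]
    for _ in range(iters):
        # assign each feature to the nearest of the two centers
        d = np.linalg.norm(feats[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)
        for k in (0, 1):
            if (labels == k).any():
                centers[k] = feats[labels == k].mean(axis=0)
    return centers, labels

def batch_clustering_loss(feats):
    """Mean intra-cluster distance minus inter-center distance: lower
    values mean tighter, better-separated normal/abnormal clusters."""
    centers, labels = two_means(feats)
    compact = np.mean([np.linalg.norm(feats[labels == k] - centers[k], axis=1).mean()
                       for k in (0, 1) if (labels == k).any()])
    sep = np.linalg.norm(centers[0] - centers[1])
    return compact - sep
```

A batch containing two well-separated groups should score lower (better) than a batch that is one undifferentiated blob.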
Video anomaly detection aims to find events in videos that do not conform to expected behavior. Prevailing methods mainly detect anomalies through snippet reconstruction or future frame prediction errors. However, such errors are highly dependent on the local context of the current snippet and lack an understanding of normality. To address this issue, we propose to detect anomalous events not only by their local context, but also according to the consistency between a test event and the knowledge of normality drawn from the training data. Specifically, we propose a novel two-stream framework based on context recovery and knowledge retrieval, in which the two streams complement each other. For the context recovery stream, we propose a spatiotemporal U-Net that fully exploits motion information to predict future frames. Furthermore, we propose a maximum local error mechanism to alleviate the problem of large recovery errors caused by complex foreground objects. For the knowledge retrieval stream, we propose an improved learnable locality-sensitive hashing, which optimizes the hash functions via a Siamese network and a mutual difference loss. The knowledge about normality is encoded and stored in hash tables, and the distance between a test event and the knowledge representation is used to reveal the probability of an anomaly. Finally, we fuse the anomaly scores from the two streams to detect anomalies. Extensive experiments demonstrate the effectiveness and complementarity of the two streams, and the proposed two-stream framework achieves state-of-the-art performance on four datasets.
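The abstract mentions a maximum local error mechanism without specifying it. One plausible reading, sketched below, is to score a frame by the maximum per-patch mean squared error instead of the global mean, so one badly recovered region is not averaged away. The patch size and non-overlapping tiling are assumptions, not the paper's exact design.

```python
import numpy as np

def max_local_error(pred, target, patch=8):
    """Anomaly score = maximum mean squared error over local patches
    of the frame (hypothetical patch size; non-overlapping tiling)."""
    err = (pred - target) ** 2
    h, w = err.shape
    best = 0.0
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            best = max(best, err[i:i + patch, j:j + patch].mean())
    return best
```

A frame with one badly mispredicted patch gets a high local score even though the global mean error is diluted by the correct regions.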
For weakly supervised anomaly detection, most existing work is limited by insufficient video representations due to the inability to model long-term contextual information. To address this issue, we propose a novel weakly supervised adaptive graph convolutional network (WAGCN) to model the complex contextual relationships among video segments. With it, we fully consider the influence of other video segments on the current one when generating the anomaly probability score for each segment. First, we combine the temporal consistency and feature similarity of video segments to construct a global graph, which makes full use of the association information among the spatio-temporal features of anomalous events in a video. Second, we propose a graph learning layer that breaks the limitation of manually set topologies and adaptively extracts the graph adjacency matrix from the data. Extensive experiments on two public datasets (i.e., the UCF-Crime dataset and the ShanghaiTech dataset) demonstrate the effectiveness of our method, which achieves state-of-the-art performance.
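The abstract says the global graph combines temporal consistency with feature similarity, but not how. A minimal sketch of one such construction follows: cosine-similarity edges fused with a Gaussian temporal-proximity term, then row-normalized. The elementwise-product fusion and the sigma value are assumptions, not the paper's rule (and the paper additionally learns the adjacency).

```python
import numpy as np

def build_adjacency(feats, sigma=1.0):
    """Global graph over video segments: feature similarity (cosine,
    negatives clipped) fused with temporal proximity (Gaussian in the
    segment index), returned as a row-stochastic adjacency matrix."""
    f = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    sim = np.clip(f @ f.T, 0, None)                  # feature-similarity branch
    idx = np.arange(len(feats))
    temp = np.exp(-((idx[:, None] - idx[None]) ** 2) / (2 * sigma ** 2))
    adj = sim * temp                                  # fuse the two branches
    return adj / adj.sum(axis=1, keepdims=True)       # row-normalize
```

Each row sums to one, and a segment's strongest edge is to itself, since both branches peak on the diagonal.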
Video anomaly detection (VAD) mainly refers to identifying anomalous events that did not occur in the training set, where only normal samples are available. Existing works usually formulate VAD as a reconstruction or prediction problem. However, the adaptability and scalability of these methods are limited. In this paper, we propose a novel distance-based VAD method that exploits all available normal data effectively and flexibly. In our method, the smaller the distance between a test sample and the normal samples, the higher the probability that the test sample is normal. Specifically, we propose to use locality-sensitive hashing (LSH) to map samples whose similarity exceeds a certain threshold into the same bucket in advance. In this way, the complexity of nearest-neighbor search is significantly reduced. To bring semantically similar samples closer and push dissimilar samples apart, we propose a novel learnable version of LSH that embeds LSH into a neural network and optimizes the hash functions with a contrastive learning strategy. Our method is robust to data imbalance and can flexibly handle large intra-class variations in normal data. In addition, it has good scalability. Extensive experiments demonstrate the superiority of our method, which achieves new state-of-the-art results on VAD benchmarks.
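The retrieval step can be sketched with classical random-hyperplane LSH: samples hashed to the same bucket are likely similar, so nearest-normal search only scans one bucket. The paper learns its hash functions with a contrastive strategy; random projections here are a stand-in, and the bit count is arbitrary.

```python
import numpy as np

class LSHIndex:
    """Random-hyperplane LSH index over normal samples (a sketch;
    the paper's hash functions are learned, not random)."""
    def __init__(self, dim, n_bits=8, seed=0):
        self.planes = np.random.default_rng(seed).normal(size=(n_bits, dim))
        self.buckets = {}

    def _key(self, x):
        # sign pattern of the projections = the hash bucket key
        return tuple((self.planes @ x > 0).astype(int))

    def add(self, x):
        self.buckets.setdefault(self._key(x), []).append(x)

    def anomaly_score(self, x):
        """Distance to the nearest stored normal sample in x's bucket;
        an empty bucket means nothing similar was seen -> high score."""
        cand = self.buckets.get(self._key(x), [])
        if not cand:
            return float("inf")
        return min(np.linalg.norm(x - c) for c in cand)
```

A stored sample scores zero, while a query whose projections all flip sign lands in an unseen bucket and scores as maximally anomalous.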
Few-shot learning is a fundamental and challenging problem since it requires recognizing novel categories from only a few examples. The objects to recognize have multiple variants and can be located anywhere in an image. Directly comparing a query image with example images cannot handle content misalignment. The representation and metric used for comparison are crucial, but challenging due to the scarcity and wide variation of samples in few-shot learning. In this paper, we present a novel semantic alignment model to compare relations, which is robust to content misalignment. We propose to add two key ingredients to existing few-shot learning frameworks for better feature and metric learning ability. First, we introduce a semantic alignment loss to align the relation statistics of features of samples belonging to the same category. Second, local and global mutual information is introduced, allowing representations that contain locally consistent and category-shared information across structural locations in an image. Third, we introduce a principled approach to weigh multiple loss functions by considering the homoscedastic uncertainty of each stream. We conduct extensive experiments on several few-shot learning datasets. Experimental results show that the proposed method is capable of comparing relations with the semantic alignment strategy and achieves state-of-the-art performance.
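Weighing losses by homoscedastic (task-level) uncertainty is commonly done with the Kendall-style formulation, in which each loss L_i is scaled by exp(-s_i) with a +s_i regularizer on the learned log-variance s_i, so noisier streams are automatically down-weighted. A minimal sketch of that combination rule (the paper's exact form may differ):

```python
import numpy as np

def weighted_multi_loss(losses, log_vars):
    """Combine task losses via homoscedastic uncertainty:
    total = sum_i exp(-s_i) * L_i + s_i, where s_i is the learned
    log-variance of stream i (Kendall-style weighting)."""
    losses = np.asarray(losses, dtype=float)
    log_vars = np.asarray(log_vars, dtype=float)
    return float(np.sum(np.exp(-log_vars) * losses + log_vars))
```

For a unit loss, the combined objective is minimized at s = 0; pushing s in either direction raises it, which is what lets s be learned jointly with the network.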
Decompilation aims to transform a low-level program language (LPL) (e.g., binary file) into its functionally-equivalent high-level program language (HPL) (e.g., C/C++). It is a core technology in software security, especially in vulnerability discovery and malware analysis. In recent years, with the successful application of neural machine translation (NMT) models in natural language processing (NLP), researchers have tried to build neural decompilers by borrowing the idea of NMT. They formulate the decompilation process as a translation problem between LPL and HPL, aiming to reduce the human cost required to develop decompilation tools and improve their generalizability. However, state-of-the-art learning-based decompilers do not cope well with compiler-optimized binaries. Since real-world binaries are mostly compiler-optimized, decompilers that do not consider optimized binaries have limited practical significance. In this paper, we propose a novel learning-based approach named NeurDP that targets compiler-optimized binaries. NeurDP uses a graph neural network (GNN) model to convert LPL to an intermediate representation (IR), which bridges the gap between source code and optimized binary. We also design an Optimized Translation Unit (OTU) to split functions into smaller code fragments for better translation performance. Evaluation results on datasets containing various types of statements show that NeurDP can decompile optimized binaries with 45.21% higher accuracy than state-of-the-art neural decompilation frameworks.
Nearest-Neighbor (NN) classification has been proven a simple and effective approach for few-shot learning. The query data can be classified efficiently by finding the nearest support class based on features extracted by pretrained deep models. However, NN-based methods are sensitive to the data distribution and may produce false predictions if the samples in the support set happen to lie around the distribution boundary of different classes. To solve this issue, we present P3DC-Shot, an improved nearest-neighbor-based few-shot classification method empowered by prior-driven data calibration. Inspired by the distribution calibration technique, which utilizes the distribution or statistics of the base classes to calibrate the data for few-shot tasks, we propose a novel discrete data calibration operation which is more suitable for NN-based few-shot classification. Specifically, we treat the prototypes representing each base class as priors and calibrate each support data point based on its similarity to different base prototypes. Then, we perform NN classification using these discretely calibrated support data. Results from extensive experiments on various datasets show that our efficient non-learning-based method can outperform, or at least be comparable to, SOTA methods which need additional learning steps.
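The calibrate-then-classify pipeline can be sketched as follows: each support vector is shifted toward its single most similar base prototype (a discrete choice, matching the abstract's "discrete" calibration idea), and queries are then classified by plain nearest neighbor. The cosine-similarity selection and the mixing weight alpha are assumptions, not the paper's exact operation.

```python
import numpy as np

def calibrate(support, base_protos, alpha=0.5):
    """Shift each support vector toward its most similar base-class
    prototype (hypothetical mixing weight alpha)."""
    out = []
    for s in support:
        sims = base_protos @ s / (
            np.linalg.norm(base_protos, axis=1) * np.linalg.norm(s) + 1e-8)
        nearest = base_protos[sims.argmax()]   # discrete prototype choice
        out.append(alpha * s + (1 - alpha) * nearest)
    return np.stack(out)

def nn_classify(query, support, labels):
    """Plain nearest-neighbor over the (calibrated) support set."""
    d = np.linalg.norm(support - query, axis=1)
    return labels[d.argmin()]
```

Calibration pulls borderline support samples toward the base prototypes they resemble, which is what makes the subsequent NN decision less sensitive to where the few support samples happen to fall.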
In recent years, arbitrary image style transfer has attracted more and more attention. Given a pair of content and style images, the goal is to produce a stylized image that retains the content of the former while capturing the style patterns of the latter. However, it is difficult to maintain a good trade-off between content details and style features. When stylizing an image with rich style patterns, the content details may be damaged, and sometimes the objects in the image cannot be distinguished clearly. For this reason, we present a new transformer-based method named STT for image style transfer, together with an edge loss that noticeably enhances the content details and avoids the blurred results caused by excessive rendering of style features. Qualitative and quantitative experiments demonstrate that STT achieves performance comparable to state-of-the-art image style transfer methods while alleviating the content leak problem.
In contrast to control-theoretic methods, the lack of a stability guarantee remains a significant problem for model-free reinforcement learning (RL) methods. Jointly learning a policy and a Lyapunov function has recently become a promising approach to endowing the whole system with a stability guarantee. However, the classical Lyapunov constraints introduced in prior work cannot stabilize the system during sampling-based optimization. Therefore, we propose the Adaptive Stability Certification (ASC), which makes the system reach sampling-based stability. Because the ASC condition can search for the optimal policy heuristically, we design the Adaptive Lyapunov-based Actor-Critic (ALAC) algorithm based on the ASC condition. Meanwhile, our algorithm avoids the optimization problem, common in current approaches, of coupling a variety of constraints into the objective. When evaluated on ten robotic tasks, our method achieves lower accumulated cost and fewer stability constraint violations than previous studies.
The surrogate loss of variational autoencoders (VAEs) poses various challenges to their training, inducing an imbalance between task fitting and representation inference. To avert this, existing strategies for VAEs focus on adjusting the tradeoff by introducing hyperparameters, deriving a tighter bound under some mild assumptions, or decomposing the loss components per certain neural settings. VAEs still suffer from uncertain tradeoff learning. We propose a novel evolutionary variational autoencoder (eVAE), building on the variational information bottleneck (VIB) theory and integrative evolutionary neural learning. eVAE integrates a variational genetic algorithm into the VAE with variational evolutionary operators including variational mutation, crossover, and evolution. Its inner-outer joint training mechanism synergistically and dynamically generates and updates the uncertain tradeoff learning in the evidence lower bound (ELBO) without additional constraints. Apart from learning a lossy compression and representation of data under the VIB assumption, eVAE presents an evolutionary paradigm to tune critical factors of VAEs and deep neural networks, and addresses the premature convergence and random search problem by integrating evolutionary optimization into deep learning. Experiments show that eVAE addresses the KL-vanishing problem in text generation with low reconstruction loss, generates disentangled factors with sharp images, and improves image generation quality. eVAE achieves a better reconstruction loss, disentanglement, and generation-inference balance than its competitors.